
    Belief Revision with Uncertain Inputs in the Possibilistic Setting

    This paper discusses belief revision under uncertain inputs in the framework of possibility theory. Revision can be based on two possible definitions of the conditioning operation: one based on the min operator, which only requires a purely ordinal scale, and another based on the product, which requires a richer structure and is a particular case of Dempster's rule of conditioning. Besides, revision under uncertain inputs can be understood in two different ways, depending on whether or not the input is viewed as a constraint to enforce. Moreover, it is shown that M.A. Williams' transmutations, originally defined in the setting of Spohn's functions, can be captured in this framework, as well as Boutilier's natural revision.
    Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996)
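    For reference, the two conditioning rules alluded to above are usually written as follows in possibility theory (a standard textbook formulation given for orientation, not quoted from the paper; π is a possibility distribution over worlds ω, Π(A) = max_{ω∈A} π(ω), and the input A is taken here as fully certain):

    \pi(\omega \mid_{\min} A) =
      \begin{cases}
        1 & \text{if } \pi(\omega) = \Pi(A) \text{ and } \omega \in A \\
        \pi(\omega) & \text{if } \pi(\omega) < \Pi(A) \text{ and } \omega \in A \\
        0 & \text{if } \omega \notin A
      \end{cases}
    \qquad
    \pi(\omega \mid_{\times} A) =
      \begin{cases}
        \pi(\omega)/\Pi(A) & \text{if } \omega \in A \\
        0 & \text{otherwise}
      \end{cases}

    The min-based rule only reorders and truncates degrees, so a purely ordinal scale suffices; the product-based rule rescales them, which is why a richer numerical structure is needed.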

    Numerical Representations of Acceptance

    Accepting a proposition means that our confidence in it is strictly greater than our confidence in its negation. This paper investigates the subclass of uncertainty measures, expressing confidence, that capture this idea of acceptance, which we call acceptance functions. Due to the monotonicity property of confidence measures, the acceptance of a proposition entails the acceptance of any of its logical consequences. In agreement with the idea that a belief set (in the sense of Gardenfors) must be closed under logical consequence, it is also required that the separate acceptance of two propositions entails the acceptance of their conjunction. Necessity (and possibility) measures agree with this view of acceptance, while probability and belief functions generally do not. General properties of acceptance functions are established. The motivation behind this work is the investigation of a setting for belief revision more general than the one proposed by Alchourron, Gardenfors and Makinson, in connection with the notion of conditioning.
    Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI 1995)
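    As a minimal illustration of the acceptance condition described above (standard notation, with g an arbitrary confidence measure and N a necessity measure; this is a gloss, not an excerpt from the paper):

    A \text{ is accepted} \iff g(A) > g(\neg A)

    For a necessity measure N (with dual possibility \Pi(A) = 1 - N(\neg A)), at least one of N(A), N(\neg A) is always 0, so accepting A amounts to N(A) > 0; and since N(A \wedge B) = \min(N(A), N(B)), separately accepting A and B entails accepting A \wedge B, which is exactly the closure property required above. For a probability measure P, by contrast, P(A) > 1/2 and P(B) > 1/2 do not guarantee P(A \wedge B) > 1/2, which is why probabilities generally fail this requirement.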

    Estimations of expectedness and potential surprise in possibility theory

    This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. In particular, we are interested in introducing estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative possibility levels of mutually exclusive alternatives in a set; equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility of taking place. Four basic set-functions associated with a possibility distribution, including the standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed for the case where only significant parts of the possibility distribution are retained in the evaluation. The case of partially known possibility distributions is also considered, and some potential applications are outlined.
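    For readers unfamiliar with the notation, the set-functions standardly associated with a possibility distribution π on a finite set Ω are recalled below (usual definitions; whether these are exactly the four functions used in the note is not asserted here):

    \Pi(A) = \max_{\omega \in A} \pi(\omega) \quad \text{(possibility: } A \text{ is not ruled out)}
    N(A) = 1 - \Pi(\bar{A}) = \min_{\omega \notin A} \bigl(1 - \pi(\omega)\bigr) \quad \text{(necessity: } A \text{ is expected, since } \bar{A} \text{ is nearly impossible)}
    \Delta(A) = \min_{\omega \in A} \pi(\omega) \quad \text{(guaranteed possibility: every way of realizing } A \text{ is plausible)}
    \nabla(A) = 1 - \Delta(\bar{A}) = \max_{\omega \notin A} \bigl(1 - \pi(\omega)\bigr) \quad \text{(potential necessity, the dual of } \Delta\text{)}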

    Practical Model-Based Diagnosis with Qualitative Possibilistic Uncertainty

    An approach to fault isolation that exploits vastly incomplete models is presented. It relies on separate descriptions of each component's behavior, together with the links between components, which makes it possible to focus the reasoning on the relevant part of the system. Since normal observations do not need explanation, the description of component behavior is limited to anomaly propagation. Diagnostic solutions are disorders (fault modes or abnormal signatures) that are consistent with the observations, as well as abductive explanations of them. An ordinal representation of uncertainty based on possibility theory provides a simple exception-tolerant description of the component behaviors: we can, for instance, express that given effects are more or less certainly present (or absent) when a given anomaly is present. A realistic example illustrates the benefits of this approach. A toy sketch of the two diagnostic criteria follows.
    Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI 1995)
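    To make the two criteria concrete, here is a minimal, hypothetical sketch (all names, degrees, and scoring rules are invented for illustration; this is not the paper's algorithm) of ranking fault modes both by consistency with the observations and by how well they abductively cover them:

    # Toy ordinal fault model: for each fault mode, the degree (in [0, 1]) to which
    # it certainly produces an effect; 0.0 means no commitment about that effect.
    FAULT_MODEL = {
        "pump_leak":    {"low_pressure": 1.0, "noise": 0.6},
        "valve_stuck":  {"low_pressure": 0.8, "overheat": 0.9},
        "sensor_drift": {"noise": 0.3},
    }

    def consistency(fault, absent, model=FAULT_MODEL):
        # A fault is consistent with the observations to the extent that no effect
        # it certainly produces has been observed to be absent.
        return 1.0 - max((model[fault].get(e, 0.0) for e in absent), default=0.0)

    def abductive_cover(fault, present, model=FAULT_MODEL):
        # A fault abductively explains the observations to the extent that it
        # certainly produces every effect observed to be present.
        return min((model[fault].get(e, 0.0) for e in present), default=1.0)

    if __name__ == "__main__":
        present, absent = {"low_pressure"}, {"overheat"}
        for fault in FAULT_MODEL:
            print(fault, "consistency:", consistency(fault, absent),
                  "abductive cover:", abductive_cover(fault, present))

    Under these (made-up) observations, "pump_leak" is both fully consistent and fully covering, while "valve_stuck" is penalized because it rather certainly produces "overheat", which was observed to be absent.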